Penalized Interaction Estimation for Ultrahigh Dimensional Quadratic Regression

Authors

Abstract

Quadratic regression goes beyond the linear model by simultaneously including main effects and interactions between the covariates. The problem of interaction estimation in high dimensional quadratic regression has received extensive attention in the past decade. In this article we introduce a novel method which allows us to estimate the main effects and interactions separately. Unlike existing methods for ultrahigh dimensional quadratic regressions, our proposal does not require the widely used heredity assumption. In addition, the proposed estimates have explicit formulas and obey the invariance principle at the population level. We estimate the interactions in matrix form under a penalized convex loss function. The resulting estimates are shown to be consistent even when the covariate dimension is of an exponential order of the sample size. We develop an efficient ADMM algorithm to implement the penalized estimation. This algorithm fully exploits the cheap computational cost of matrix multiplication and is much more efficient than existing penalized methods such as the all-pairs LASSO. We demonstrate the promising performance of our proposal through extensive numerical studies.
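To make the setting concrete: a quadratic regression model of the kind referred to above is typically written as y = beta_0 + x^T beta + x^T Omega x + epsilon, where beta collects the main effects and the symmetric matrix Omega collects the pairwise interactions; the exact parameterization used by the authors may differ. The sketch below is a minimal, generic ADMM iteration for an L1-penalized least-squares problem posed in matrix form. It only illustrates why ADMM pairs well with cheap matrix multiplication (each step is a matrix product, a linear solve with a pre-factored matrix, or an elementwise soft-threshold); it is not the authors' estimator, and all function and variable names are illustrative.

import numpy as np

def soft_threshold(A, kappa):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(A) * np.maximum(np.abs(A) - kappa, 0.0)

def admm_lasso_matrix(X, Y, lam, rho=1.0, n_iter=200):
    # Generic ADMM for  min_B 0.5*||Y - X B||_F^2 + lam*||B||_1,
    # in the standard splitting B = Z (scaled dual variable U).
    # Illustrative only; NOT the paper's specific estimator.
    n, p = X.shape
    q = Y.shape[1]
    B = np.zeros((p, q))
    Z = np.zeros((p, q))
    U = np.zeros((p, q))
    XtX = X.T @ X
    XtY = X.T @ Y
    # Factor (X^T X + rho I) once and reuse it in every iteration.
    L = np.linalg.cholesky(XtX + rho * np.eye(p))
    for _ in range(n_iter):
        rhs = XtY + rho * (Z - U)
        B = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # B-update (quadratic step)
        Z = soft_threshold(B + U, lam / rho)               # Z-update (L1 prox)
        U = U + B - Z                                      # dual update
    return Z

For example, admm_lasso_matrix(X, Y, lam=0.1) returns a sparse coefficient matrix; the per-iteration cost is dominated by matrix multiplications and two triangular-style solves, which is the flavor of cheap linear algebra the abstract alludes to.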


Related articles

Variance estimation using refitted cross-validation in ultrahigh dimensional regression.

Variance estimation is a fundamental problem in statistical modelling. In ultrahigh dimensional linear regression where the dimensionality is much larger than the sample size, traditional variance estimation techniques are not applicable. Recent advances in variable selection in ultrahigh dimensional linear regression make this problem accessible. One of the major problems in ultrahigh dimensio...


Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection.

In high-dimensional model selection problems, penalized least-squares approaches have been extensively used. This paper addresses the question of both robustness and efficiency of penalized model selection methods, and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty. It is completely data-adaptive and does not require prior knowled...


Nonparametric regression estimation using penalized least squares

We present multivariate penalized least squares regression estimates. We use Vapnik-Chervonenkis theory and bounds on the covering numbers to analyze convergence of the estimates. We show strong consistency of the truncated versions of the estimates without any conditions on the underlying distribution.


Refitted Cross-validation in Ultrahigh Dimensional Regression

Variance estimation is a fundamental problem in statistical modeling. In ultrahigh dimensional linear regressions where the dimensionality is much larger than sample size, traditional variance estimation techniques are not applicable. Recent advances on variable selection in ultrahigh dimensional linear regressions make this problem more accessible. One of the major problems in ultrahigh dimens...


Penalized Bregman divergence for large-dimensional regression and classification.

Regularization methods are characterized by loss functions measuring data fits and penalty terms constraining model parameters. The commonly used quadratic loss is not suitable for classification with binary responses, whereas the loglikelihood function is not readily applicable to models where the exact distribution of observations is unknown or not fully specified. We introduce the penalized ...



Journal

Journal title: Statistica Sinica

Year: 2021

ISSN: 1017-0405, 1996-8507

DOI: https://doi.org/10.5705/ss.202019.0081